35 research outputs found

    Social Capital and Mental Health among Chinese Older Adults

    Get PDF
    This study aims to develop our understanding of social capital by exploring the dimensions and profiles of social capital among Chinese older adults and the factors and health conditions associated with social capital in later life. The approach was secondary analyses of the China Family Panel Study (CFPS), a nationally representative survey of the Chinese population. It is the intent of this work to offer applicable guidance on policy and project design regarding health promotion for the aging population through social intervention. Aim 1 explores the dimensions and associative factors of social capital of Chinese older adults. It was found that the social capital of Chinese older adults was derived from three levels of social environment---family, community and the macro society. Older adults who relied heavily on family-level social capital may be constrained in their capacity to seek resources and social supports outside their immediate family. The physical community environment plays an influential determinant of social capital. Aim 2 identifies three distinct social capital profiles among Chinese older adults: Family-centered, Moderate and Diverse social capital. The use of individual-based categorization contributes to our understanding as it better captures the reality of older adults engaging in various social relationships and provides valuable insights into the complex interaction between aspects of social capital and heterogeneous older groups. The results suggest that family is still a salient source of social capital for Chinese older adults, while a deficiency in community-level social capital is faced by many older people. The findings also highlight the vulnerability of the Family-centered group whose access to all forms of social capital was limited. Results suggest that supporting communities to improve the physical environment and developing social capital interventions targeting older adults could be effective strategies to prevent depressive symptoms and promote Chinese older adults’ overall wellbeing. Aim 3 examines the mediating role of community social capital underlying the link between the built neighborhood environment and depressive symptoms among urban and rural older adults. It also explores the moderating role of other sources of social capital. Results suggest that the interaction between the built and social neighborhood environments is related to depressive symptoms for urban and rural older adults in later life. Other levels of social environment also played a role in this process, but the effects have differed in rural/urban areas. Supporting rural and urban communities with physical infrastructure and service availability and developing social capital interventions targeting older adults could be effective strategies to prevent depressive symptoms and promote Chinese older adults’ overall wellbeing. In summary, the results of this present study show that social capital, as an interaction between the actor and the multiple levels of the social environment, was derived from different environmental levels, including families, communities and the broader society. In the Chinese context, family is still an important source of social capital for older adults despite increasing reliance and sometimes preference for formal support. Meanwhile, older adults’ social capital is highly connected with the community they live in. The findings from this dissertation expand our understanding of social capital by integrating the ecological perspective and addressing limitations introduced by viewing social capital by single indicators. The methodology used to identify social capital compositions and profiles in this dissertation provides a more stable and accurate base for studying the complex social relationship of a “whole” person in real life. It is also suggested that health promoting interventions should incorporate environmental components, and use social capital building as a crucial pathway in health-promoting. Since community social capital plays an increasingly important role in maintaining older people\u27s mental health, community development should be prioritized in health-promoting programs for the older population

    Demystifying RCE Vulnerabilities in LLM-Integrated Apps

    Full text link
    In recent years, Large Language Models (LLMs) have demonstrated remarkable potential across various downstream tasks. LLM-integrated frameworks, which serve as the essential infrastructure, have given rise to many LLM-integrated web apps. However, some of these frameworks suffer from Remote Code Execution (RCE) vulnerabilities, allowing attackers to execute arbitrary code on apps' servers remotely via prompt injections. Despite the severity of these vulnerabilities, no existing work has been conducted for a systematic investigation of them. This leaves a great challenge on how to detect vulnerabilities in frameworks as well as LLM-integrated apps in real-world scenarios. To fill this gap, we present two novel strategies, including 1) a static analysis-based tool called LLMSmith to scan the source code of the framework to detect potential RCE vulnerabilities and 2) a prompt-based automated testing approach to verify the vulnerability in LLM-integrated web apps. We discovered 13 vulnerabilities in 6 frameworks, including 12 RCE vulnerabilities and 1 arbitrary file read/write vulnerability. 11 of them are confirmed by the framework developers, resulting in the assignment of 7 CVE IDs. After testing 51 apps, we found vulnerabilities in 17 apps, 16 of which are vulnerable to RCE and 1 to SQL injection. We responsibly reported all 17 issues to the corresponding developers and received acknowledgments. Furthermore, we amplify the attack impact beyond achieving RCE by allowing attackers to exploit other app users (e.g. app responses hijacking, user API key leakage) without direct interaction between the attacker and the victim. Lastly, we propose some mitigating strategies for improving the security awareness of both framework and app developers, helping them to mitigate these risks effectively

    ACETest: Automated Constraint Extraction for Testing Deep Learning Operators

    Full text link
    Deep learning (DL) applications are prevalent nowadays as they can help with multiple tasks. DL libraries are essential for building DL applications. Furthermore, DL operators are the important building blocks of the DL libraries, that compute the multi-dimensional data (tensors). Therefore, bugs in DL operators can have great impacts. Testing is a practical approach for detecting bugs in DL operators. In order to test DL operators effectively, it is essential that the test cases pass the input validity check and are able to reach the core function logic of the operators. Hence, extracting the input validation constraints is required for generating high-quality test cases. Existing techniques rely on either human effort or documentation of DL library APIs to extract the constraints. They cannot extract complex constraints and the extracted constraints may differ from the actual code implementation. To address the challenge, we propose ACETest, a technique to automatically extract input validation constraints from the code to build valid yet diverse test cases which can effectively unveil bugs in the core function logic of DL operators. For this purpose, ACETest can automatically identify the input validation code in DL operators, extract the related constraints and generate test cases according to the constraints. The experimental results on popular DL libraries, TensorFlow and PyTorch, demonstrate that ACETest can extract constraints with higher quality than state-of-the-art (SOTA) techniques. Moreover, ACETest is capable of extracting 96.4% more constraints and detecting 1.95 to 55 times more bugs than SOTA techniques. In total, we have used ACETest to detect 108 previously unknown bugs on TensorFlow and PyTorch, with 87 of them confirmed by the developers. Lastly, five of the bugs were assigned with CVE IDs due to their security impacts.Comment: Accepted by ISSTA 202

    Jailbreaker: Automated Jailbreak Across Multiple Large Language Model Chatbots

    Full text link
    Large Language Models (LLMs) have revolutionized Artificial Intelligence (AI) services due to their exceptional proficiency in understanding and generating human-like text. LLM chatbots, in particular, have seen widespread adoption, transforming human-machine interactions. However, these LLM chatbots are susceptible to "jailbreak" attacks, where malicious users manipulate prompts to elicit inappropriate or sensitive responses, contravening service policies. Despite existing attempts to mitigate such threats, our research reveals a substantial gap in our understanding of these vulnerabilities, largely due to the undisclosed defensive measures implemented by LLM service providers. In this paper, we present Jailbreaker, a comprehensive framework that offers an in-depth understanding of jailbreak attacks and countermeasures. Our work makes a dual contribution. First, we propose an innovative methodology inspired by time-based SQL injection techniques to reverse-engineer the defensive strategies of prominent LLM chatbots, such as ChatGPT, Bard, and Bing Chat. This time-sensitive approach uncovers intricate details about these services' defenses, facilitating a proof-of-concept attack that successfully bypasses their mechanisms. Second, we introduce an automatic generation method for jailbreak prompts. Leveraging a fine-tuned LLM, we validate the potential of automated jailbreak generation across various commercial LLM chatbots. Our method achieves a promising average success rate of 21.58%, significantly outperforming the effectiveness of existing techniques. We have responsibly disclosed our findings to the concerned service providers, underscoring the urgent need for more robust defenses. Jailbreaker thus marks a significant step towards understanding and mitigating jailbreak threats in the realm of LLM chatbots

    Understanding Large Language Model Based Fuzz Driver Generation

    Full text link
    Fuzz drivers are a necessary component of API fuzzing. However, automatically generating correct and robust fuzz drivers is a difficult task. Compared to existing approaches, LLM-based (Large Language Model) generation is a promising direction due to its ability to operate with low requirements on consumer programs, leverage multiple dimensions of API usage information, and generate human-friendly output code. Nonetheless, the challenges and effectiveness of LLM-based fuzz driver generation remain unclear. To address this, we conducted a study on the effects, challenges, and techniques of LLM-based fuzz driver generation. Our study involved building a quiz with 86 fuzz driver generation questions from 30 popular C projects, constructing precise effectiveness validation criteria for each question, and developing a framework for semi-automated evaluation. We designed five query strategies, evaluated 36,506 generated fuzz drivers. Furthermore, the drivers were compared with manually written ones to obtain practical insights. Our evaluation revealed that: while the overall performance was promising (passing 91% of questions), there were still practical challenges in filtering out the ineffective fuzz drivers for large scale application; basic strategies achieved a decent correctness rate (53%), but struggled with complex API-specific usage questions. In such cases, example code snippets and iterative queries proved helpful; while LLM-generated drivers showed competent fuzzing outcomes compared to manually written ones, there was still significant room for improvement, such as incorporating semantic oracles for logical bugs detection.Comment: 17 pages, 14 figure

    Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

    Full text link
    Large Language Models (LLMs), like ChatGPT, have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse. Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts. Initially, we develop a classification model to analyze the distribution of existing prompts, identifying ten distinct patterns and three categories of jailbreak prompts. Subsequently, we assess the jailbreak capability of prompts with ChatGPT versions 3.5 and 4.0, utilizing a dataset of 3,120 jailbreak questions across eight prohibited scenarios. Finally, we evaluate the resistance of ChatGPT against jailbreak prompts, finding that the prompts can consistently evade the restrictions in 40 use-case scenarios. The study underscores the importance of prompt structures in jailbreaking LLMs and discusses the challenges of robust jailbreak prompt generation and prevention

    Prompt Injection attack against LLM-integrated Applications

    Full text link
    Large Language Models (LLMs), renowned for their superior proficiency in language comprehension and generation, stimulate a vibrant ecosystem of applications around them. However, their extensive assimilation into various services introduces significant security risks. This study deconstructs the complexities and implications of prompt injection attacks on actual LLM-integrated applications. Initially, we conduct an exploratory analysis on ten commercial applications, highlighting the constraints of current attack strategies in practice. Prompted by these limitations, we subsequently formulate HouYi, a novel black-box prompt injection attack technique, which draws inspiration from traditional web injection attacks. HouYi is compartmentalized into three crucial elements: a seamlessly-incorporated pre-constructed prompt, an injection prompt inducing context partition, and a malicious payload designed to fulfill the attack objectives. Leveraging HouYi, we unveil previously unknown and severe attack outcomes, such as unrestricted arbitrary LLM usage and uncomplicated application prompt theft. We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection. 10 vendors have validated our discoveries, including Notion, which has the potential to impact millions of users. Our investigation illuminates both the possible risks of prompt injection attacks and the possible tactics for mitigation
    corecore